
Improving Efficiency in Convolutional Neural Network with Multilinear Filters


Abstract

The excellent performance of deep neural networks has enabled us to solve several automatization problems, opening an era of autonomous devices. However, current deep net architectures are heavy, with millions of parameters, and require billions of floating-point operations. Several works have been developed to compress a pre-trained deep network to reduce its memory footprint and, possibly, its computation. Instead of compressing a pre-trained network, in this work we propose a generic neural network layer structure employing multilinear projection as the primary feature extractor. The proposed architecture requires several times less memory than a traditional Convolutional Neural Network (CNN), while inheriting the similar design principles of a CNN. In addition, the proposed architecture is equipped with two computation schemes that enable computation reduction or scalability. Experimental results show the effectiveness of our compact projection, which outperforms the traditional CNN while requiring far fewer parameters.
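The memory saving described in the abstract can be illustrated with a minimal sketch, assuming the simplest case of a rank-1 multilinear filter: instead of storing a full convolutional filter tensor, one vector per tensor mode (channel, height, width) is stored, and the filter response is a multilinear projection of the input patch. All names here are illustrative and not the authors' implementation.

```python
import numpy as np

# A standard conv filter over a (C, H, W) patch stores C*H*W parameters.
C, H, W = 64, 3, 3
full_params = C * H * W          # 576 parameters

# A rank-1 multilinear filter factorizes the filter into one vector per
# mode, so only C + H + W parameters need to be stored.
rng = np.random.default_rng(0)
u_c = rng.standard_normal(C)     # channel-mode vector
u_h = rng.standard_normal(H)     # height-mode vector
u_w = rng.standard_normal(W)     # width-mode vector
ml_params = C + H + W            # 70 parameters

# The implied full filter is the outer product of the mode vectors.
filt = np.einsum('c,h,w->chw', u_c, u_h, u_w)

# Applying it to a patch x is a multilinear projection: contracting each
# mode in turn gives the same scalar response as <x, filt>.
x = rng.standard_normal((C, H, W))
resp = np.einsum('chw,c,h,w->', x, u_c, u_h, u_w)
assert np.isclose(resp, np.sum(x * filt))

print(full_params, ml_params)    # 576 vs 70 parameters per filter
```

Sliding this projection over all patches of a feature map recovers a convolution-like layer at a fraction of the parameter count; higher-rank variants (sums of such outer products) trade memory for expressiveness.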
